991.
Exploring massive mobile data for location-based services has become one of the key challenges in mobile data mining. In this paper, we investigate the problem of finding a correlation between the collective behavior of mobile users and the distribution of points of interest (POIs) in a city. Specifically, we use large-scale data dumps collected from cell towers and POIs extracted from a popular social network service, Weibo. Our objective is to combine the data from these two different types of sources to build a model for predicting the POI densities of different regions in the covered area. An application domain that may benefit from our research is business recommendation, where a prediction result can be used as a recommendation for opening a new store or branch. The crux of our contribution is the method of representing the collective behavior of mobile users as a histogram of connection counts over a period of time in each region. This representation ultimately enables us to apply a supervised learning algorithm to our problem and train a POI prediction model using the POI data set as the ground truth. We studied 12 state-of-the-art classification and regression algorithms; experimental results demonstrate the feasibility and effectiveness of the proposed method.
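A minimal sketch of the feature construction and prediction step described in this abstract, assuming a fixed grid of regions and hourly time bins; the grid size, bin count, and synthetic data are illustrative assumptions, and a Random Forest regressor stands in for the 12 algorithms the authors actually compare.

```python
# Hedged sketch: histogram-of-connection-counts features per region, then a
# supervised regressor trained against POI densities. Region/bin sizes and the
# synthetic data are assumptions; the paper compares 12 algorithms.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def connection_histogram(connections, n_regions, n_bins):
    """connections: iterable of (region_id, time_bin) pairs from cell tower logs."""
    hist = np.zeros((n_regions, n_bins))
    for region_id, time_bin in connections:
        hist[region_id, time_bin] += 1
    return hist  # one row per region: the collective-behavior feature vector

# Stand-in data: 100 regions, 24 hourly bins, POI densities as ground truth.
rng = np.random.default_rng(0)
X = rng.poisson(5.0, size=(100, 24)).astype(float)
y = 0.1 * X.sum(axis=1) + rng.normal(0.0, 1.0, 100)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
print(model.predict(X[:3]))  # predicted POI densities for three regions
```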
992.
As software systems continue to play an important role in our daily lives, their quality is of paramount importance. Therefore, a plethora of prior research has focused on predicting which components of software are defect-prone. One line of this research focuses on predicting software changes that are fix-inducing. Although the prior research on fix-inducing changes has many advantages in terms of highly accurate results, it has one main drawback: it assigns the same level of impact to all fix-inducing changes. We argue that treating all fix-inducing changes the same is not ideal, since a small typo in a change is easier for a developer to address than a thread synchronization issue. Therefore, in this paper, we study high-impact fix-inducing changes (HIFCs). Since the impact of a change can be measured in different ways, we first propose a measure of the impact of fix-inducing changes, which takes into account the implementation work that needs to be done by developers in later (fixing) changes. Our measure of impact for a fix-inducing change uses the amount of churn, the number of files, and the number of subsystems modified by developers during an associated fix of the fix-inducing change. We perform our study using six large open source projects to build specialized models that identify HIFCs, determine the best indicators of HIFCs, and examine the benefits of prioritizing HIFCs. Using change factors, we are able to predict 56 % to 77 % of HIFCs with an average false alarm (misclassification) rate of 16 %. We find that the lines of code added, the number of developers who worked on a change, and the number of prior modifications to the files modified during a change are the best indicators of HIFCs. Lastly, we observe that a specialized model for HIFCs can provide inspection effort savings of 4 % over state-of-the-art models. We believe our results will help practitioners prioritize their efforts towards the most impactful fix-inducing changes and save inspection effort.
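The sketch below illustrates one plausible reading of the impact measure outlined above (churn, files, and subsystems touched during the associated fix); the equal-weight normalization and the top-quartile HIFC cutoff are our assumptions, not taken from the paper.

```python
# Hedged sketch of an impact score for a fix-inducing change, combining the
# churn, file count, and subsystem count of its associated fix. Normalization
# and the HIFC cutoff are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Fix:
    churn: int        # lines added + deleted in the fixing change(s)
    files: int        # number of files modified during the fix
    subsystems: int   # number of subsystems modified during the fix

def impact(fix, max_churn, max_files, max_subsystems):
    """Average of the three dimensions, each scaled to [0, 1] (equal weights assumed)."""
    return (fix.churn / max_churn
            + fix.files / max_files
            + fix.subsystems / max_subsystems) / 3.0

fixes = [Fix(10, 1, 1), Fix(450, 12, 3), Fix(60, 4, 2)]
scores = [impact(f, 450, 12, 3) for f in fixes]
cutoff = sorted(scores)[int(0.75 * len(scores))]   # hypothetical top-quartile cutoff
print([(round(s, 2), s >= cutoff) for s in scores])
```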
993.
Reuse of software components, either closed or open source, is considered one of the most important best practices in software engineering, since it reduces development cost and improves software quality. However, since reused components are (by definition) generic, they need to be customized and integrated into a specific system before they can be useful. Because this integration is system-specific, the integration effort is non-negligible and increases maintenance costs, especially if more than one component needs to be integrated. This paper performs an empirical study of multi-component integration in the context of three successful open source distributions (Debian, Ubuntu and FreeBSD). Such distributions integrate thousands of open source components with an operating system kernel to deliver a coherent software product to millions of users worldwide. We empirically identified seven major integration activities performed by the maintainers of these distributions, documented how these activities are carried out, and then evaluated and refined the identified activities with input from six maintainers of the three studied distributions. The documented activities provide a common vocabulary for component integration in open source distributions and outline a roadmap for future research on software integration.
994.
Communication in global software development is hindered by language differences in countries with a lack of English-speaking professionals. Machine translation is a technology that uses software to translate from one natural language to another, and the progress of machine translation systems has been steady over the last decade. At present, machine translation technology is particularly appealing because it might be used, in the form of cross-language chat services, in countries that are entering into global software projects. However, despite the recent progress of the technology, we still lack a thorough understanding of how real-time machine translation affects communication. In this paper, we present a set of empirical studies with the goal of assessing to what extent real-time machine translation can be used in distributed, multilingual requirements meetings instead of English. Results suggest that, despite being far from 100 % accurate, real-time machine translation does not disrupt the conversation flow and is therefore accepted favorably by participants. However, stronger effects can be expected to emerge when language barriers are more critical. Our findings add to the evidence about the recent advances of machine translation technology and provide some guidance to global software engineering practitioners regarding the losses and gains of using English as a lingua franca in multilingual group communication, as in the case of computer-mediated requirements meetings.
995.
When interacting with a source control management system, developers often commit unrelated or loosely related code changes in a single transaction. When analyzing version histories, such tangled changes make all changes to all modules appear related, possibly compromising the resulting analyses through noise and bias. In an investigation of five open-source Java projects, we found that between 7 % and 20 % of all bug fixes consist of multiple tangled changes. Using a multi-predictor approach to untangle changes, we show that on average at least 16.6 % of all source files are incorrectly associated with bug reports. These incorrect bug-to-file associations do not appear to significantly impact models that classify source files as having at least one bug or no bugs. However, our experiments show that untangling tangled code changes can result in more accurate regression bug prediction models compared to models trained and tested on tangled bug datasets; in our experiments, the statistically significant accuracy improvements lie between 5 % and 200 %. We recommend better change organization to limit the impact of tangled changes.
996.
Several code smell detection tools have been developed, but they provide different results because smells can be subjectively interpreted, and hence detected, in different ways. In this paper, we perform, to the best of our knowledge, the largest experiment to date applying machine learning algorithms to code smell detection. We experiment with 16 different machine-learning algorithms on four code smells (Data Class, Large Class, Feature Envy, Long Method) and 74 software systems, with 1986 manually validated code smell samples. We found that all algorithms achieved high performance on the cross-validation data set, with the highest performance obtained by J48 and Random Forest and the worst achieved by support vector machines. However, the lower prevalence of code smells, i.e., imbalanced data, in the entire data set caused varying performance that needs to be addressed in future studies. We conclude that applying machine learning to the detection of these code smells can provide high accuracy (>96 %), and that only a hundred training examples are needed to reach at least 95 % accuracy.
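As a rough illustration of the experimental setup, the sketch below trains a Random Forest (one of the two top performers reported) on synthetic per-class metrics and evaluates it with cross-validation; the metric choice, labels, and data are illustrative assumptions rather than the paper's 74-system corpus.

```python
# Hedged sketch: a Random Forest smell detector evaluated with 10-fold
# cross-validation. Features, labels, and data are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
# Hypothetical per-class metrics: e.g. lines of code, methods, coupling, cohesion.
X = rng.normal(size=(500, 4))
y = (X[:, 0] + X[:, 2] > 1.0).astype(int)   # stand-in "Large Class" labels

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=10).mean())   # mean cross-validation accuracy
```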
997.
We are witnessing significant growth in the number of smartphone users and advances in phone hardware and sensor technology. In conjunction with the popularity of video applications such as YouTube, an unprecedented number of user-generated videos (UGVs) are being generated and consumed by the public, which leads to a Big Data challenge in social media. In a very large video repository, it is difficult to index and search videos in their unstructured form. However, due to recent developments, videos can be geo-tagged at acquisition time (e.g., with locations from a GPS receiver and viewing directions from a digital compass), which offers potential for efficient management of video data. Ideally, each video frame can be tagged with the spatial extent of its coverage area, termed its Field-Of-View (FOV). This effectively converts a challenging video management problem into a spatial database problem. This paper attacks the challenges of large-scale video data management using spatial indexing and querying of FOVs, in particular by maximally harnessing the geographical properties of FOVs. Since FOVs are shaped like slices of pie and contain both location and orientation information, conventional spatial indexes, such as the R-tree, cannot index them efficiently. Moreover, the distribution of UGVs’ locations is non-uniform (e.g., more FOVs in popular locations), so even multilevel grid-based indexes, which can handle both location and orientation, have limitations in managing the skewed distribution. Additionally, since UGVs are usually captured in a casual way with diverse setups and movements, no a priori assumption can be made to condense them into an index structure. To overcome these challenges, we propose a class of new R-tree-based index structures that effectively harness FOVs’ camera locations, orientations and view distances, in tandem, for both filtering and optimization. We also present novel search strategies and algorithms for efficient range and directional queries on our indexes. Our experiments using both real-world and large synthetic video datasets (over 30 years’ worth of video) demonstrate the scalability and efficiency of our proposed indexes and search algorithms.
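To make the FOV model concrete, here is a small sketch of the pie-slice geometry (camera location, viewing direction, viewable angle, view distance) with a brute-force point-coverage test; it stands in for, and does not reproduce, the R-tree-based indexes and query algorithms proposed in the paper, and the field names are assumptions.

```python
# Hedged sketch of the pie-slice FOV model; a brute-force coverage test stands
# in for the paper's R-tree-based indexes. Field names are illustrative.
import math
from dataclasses import dataclass

@dataclass
class FOV:
    x: float            # camera location (x)
    y: float            # camera location (y)
    direction: float    # viewing direction in degrees
    angle: float        # viewable angle in degrees
    distance: float     # view distance

def covers(fov, px, py):
    """True if point (px, py) lies inside the pie-slice coverage area of the FOV."""
    dx, dy = px - fov.x, py - fov.y
    if math.hypot(dx, dy) > fov.distance:
        return False
    bearing = math.degrees(math.atan2(dy, dx)) % 360.0
    diff = abs((bearing - fov.direction + 180.0) % 360.0 - 180.0)
    return diff <= fov.angle / 2.0

fovs = [FOV(0, 0, 45, 60, 100), FOV(50, 50, 270, 60, 80)]
print([covers(f, 30, 30) for f in fovs])   # which frames cover the query point?
```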
998.
Tracking spatio-temporal activity is highly relevant for domains such as security, health, and quality management. Since animal welfare became a topic in politics and legislation, locomotion patterns of livestock have received increasing interest. In contrast to the monitoring of pedestrians, cattle activity tracking poses special challenges to both sensors and data analysis: interesting states are not directly observable by a single sensor, and sensors must be accepted by cattle and robust enough to cope with a rough environment. In this article, we introduce the novel combination of heart rate and positioning sensors. Attached to the neck and chest, they interfere less with the animals than accelerometers at the ankles. Exploiting the potential of such a combined sensor system, which records locomotion together with non-spatial information from the heart rate sensor, is nevertheless challenging. We introduce a novel two-level method for activity tracking focused on the duration and sequence of activity states, combining a Support Vector Machine (SVM) with a Conditional Random Field (CRF) and extending the CRF with an explicit representation of duration. The SVM characterizes local activity states, whereas the CRF maps sequences of local states to activity sequences that incorporate spatial and non-spatial contextual knowledge. This combination provides reliable and comprehensive identification of defined activity patterns, as well as their chronology and durations, suitable for integration into an activity database. This database is used to extract physiological parameters and promises insights into internal states such as fitness, well-being and stress. Interestingly, we were able to demonstrate a significant correlation between resting pulse rate and the day of pregnancy.
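A hedged sketch of the two-level idea follows: an SVM labels local activity states from per-window features, and a second stage enforces state durations. The simple minimum-duration filter below is only a stand-in for the duration-extended CRF, and the features and labels are synthetic assumptions.

```python
# Hedged sketch: level 1 labels local activity states with an SVM; level 2
# enforces durations. A minimum-duration filter stands in for the CRF.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Synthetic per-window features (e.g. heart rate, speed, heading change).
X = np.vstack([rng.normal(0, 1, (100, 3)), rng.normal(2, 1, (100, 3))])
y = np.array([0] * 100 + [1] * 100)        # 0 = resting, 1 = moving (assumed labels)
svm = SVC(kernel="rbf").fit(X, y)

def enforce_min_duration(states, min_len=5):
    """Relabel runs shorter than min_len with the preceding state (heuristic)."""
    states = list(states)
    i = 0
    while i < len(states):
        j = i
        while j < len(states) and states[j] == states[i]:
            j += 1
        if j - i < min_len and i > 0:
            states[i:j] = [states[i - 1]] * (j - i)
        i = j
    return states

local = svm.predict(X)                       # level 1: local activity states
print(enforce_min_duration(local)[:20])      # level 2: duration-aware sequence
```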
999.
1000.
An important task in speaker verification is to generate speaker-specific models and match an input speaker’s utterance against these models. This paper focuses on comparing the performance of a text-dependent speaker verification system using Mel Frequency Cepstral Coefficient (MFCC) features and different Vector Quantization (VQ) based speaker modelling techniques to generate the speaker-specific models. Speaker-specific information is mainly represented by spectral features, and from these features we build the model that serves as the basis for determining the claimed identity of the speaker. In the modelling stage, we used Linde-Buzo-Gray (LBG) VQ, a proposed adaptive LBG VQ, and Fuzzy C-Means (FCM) VQ to generate the speaker-specific models. Experiments performed on a microphone-recorded database show that accuracy depends significantly on the codebook size in all VQ techniques, and that for FCM VQ it also depends on the value of the learning parameter of the objective function. The results thus show how the accuracy of the speaker verification system depends on the codebook representation, the codebook size in the VQ modelling techniques, and the learning parameter in FCM VQ.
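As a rough illustration of VQ-based speaker modelling and matching, the sketch below grows an LBG-style codebook on MFCC-like vectors and accepts or rejects a test utterance by its average quantization distortion; the codebook size, the synthetic features, and the decision threshold are assumptions, and the adaptive LBG and FCM variants are not shown.

```python
# Hedged sketch: LBG codebook training per speaker and verification by average
# quantization distortion. Features, codebook size, and threshold are stand-ins.
import numpy as np

def lbg_codebook(features, size=16, eps=0.01, iters=20):
    """Grow a codebook by binary splitting, refining centroids at each level."""
    codebook = features.mean(axis=0, keepdims=True)
    while len(codebook) < size:
        codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])  # split
        for _ in range(iters):
            d = np.linalg.norm(features[:, None] - codebook[None], axis=2)
            nearest = d.argmin(axis=1)
            for k in range(len(codebook)):
                members = features[nearest == k]
                if len(members):
                    codebook[k] = members.mean(axis=0)
    return codebook

def avg_distortion(features, codebook):
    d = np.linalg.norm(features[:, None] - codebook[None], axis=2)
    return d.min(axis=1).mean()

rng = np.random.default_rng(0)
train = rng.normal(0, 1, (400, 13))   # stand-in MFCC vectors for the claimed speaker
test = rng.normal(0, 1, (100, 13))    # stand-in MFCC vectors from a test utterance
model = lbg_codebook(train)
score = avg_distortion(test, model)
print(score, score < 3.5)             # hypothetical acceptance threshold
```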